Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and the manifold assumption. However, this assumption may be invalid for high-dimensional or sparse data because of the curse of dimensionality, which degrades the performance of multiple kernel learning. In addition, some models become ill-posed when the rank of the matrices in their objective functions is not sufficiently high. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Unlike the conventional convex relaxation technique, the proposed algorithm directly employs a binary search combined with an alternating optimization scheme to obtain optimal solutions efficiently. Experimental results demonstrate the effectiveness of the proposed method in supervised, unsupervised, and semi-supervised scenarios.
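The abstract does not spell out the solver, but graph-embedding objectives of this family typically reduce to a trace-ratio problem, and a binary search is a standard way to solve it without convex relaxation. The following minimal sketch illustrates that idea under stated assumptions: `Sa` and `Sb` are hypothetical positive (semi-)definite graph-embedding scatter matrices built from the combined kernel, `Sb` is assumed nonsingular, and the function name is illustrative, not from the paper.

```python
import numpy as np

def trace_ratio_binary_search(Sa, Sb, d, tol=1e-8, max_iter=100):
    """Solve max_{W^T W = I} tr(W^T Sa W) / tr(W^T Sb W) for an
    n x d projection W via binary search on the ratio value lam.

    At the optimum, g(lam) = max_W tr(W^T (Sa - lam * Sb) W) = 0,
    where g(lam) equals the sum of the d largest eigenvalues of
    Sa - lam * Sb; g decreases monotonically in lam when Sb is
    positive definite, so bisection converges to the optimum.
    """
    def g(lam):
        # eigh returns eigenvalues in ascending order
        eigvals, eigvecs = np.linalg.eigh(Sa - lam * Sb)
        return eigvals[-d:].sum(), eigvecs[:, -d:]

    lo = 0.0                       # g(0) >= 0 for PSD Sa
    hi = np.trace(Sa) / max(np.trace(Sb), 1e-12)
    while g(hi)[0] > 0:            # expand until the root is bracketed
        hi *= 2.0

    lam, W = hi, None
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        val, W = g(lam)
        if abs(val) < tol:
            break
        if val > 0:
            lo = lam               # optimal ratio is larger
        else:
            hi = lam               # optimal ratio is smaller
    return lam, W
```

In a multiple kernel setting, a sketch like this would serve as the inner step of an alternating scheme: fix the kernel combination weights, solve for the projection `W` as above, then update the weights with `W` fixed, repeating until convergence.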